# Load libraries
library(readr)
library(skimr)
library(broom)
library(dplyr)
library(ggplot2)
library(kableExtra)
library(knitr)
library(mgcViz)
library(mgcv)
library(MASS)
library(tidyr)

This tutorial follows from Section 2, “Modeling considerations when estimating CDD”, of the main text.
In this section, we examine a subset of the Barro Colorado Island (BCI) 50-ha plot seedling data to illustrate the impact of conspecific density on mortality probability. We calculate the Average Marginal Effect (more about marginal effects in general here) as our metric of the strength of conspecific density dependence. We then estimate both the absolute Average Marginal Effect (aAME) and the relative Average Marginal Effect (rAME). The aAME represents the average absolute change in mortality probability due to a specified increase in conspecific density. In contrast, the rAME is the relative change in mortality probability compared to a baseline value.
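As a toy numeric illustration (hypothetical probabilities, not from the BCI data), the two metrics differ only in whether the change in mortality probability is scaled by the baseline:

```r
# Hypothetical predicted mortality probabilities for five seedlings,
# before (p0) and after (p1) adding one conspecific neighbor
p0 <- c(0.10, 0.12, 0.08, 0.20, 0.15)  # baseline mortality
p1 <- c(0.13, 0.16, 0.10, 0.26, 0.19)  # after +1 conspecific

aAME <- mean(p1 - p0)         # absolute change in probability
rAME <- mean((p1 - p0) / p0)  # change relative to baseline mortality
aAME  # 0.038: mortality rises by ~3.8 percentage points on average
rAME  # 0.29: mortality rises by ~29% relative to its baseline
```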
The standard approach for modeling plant CDD patterns assumes a linear relationship between performance (e.g., mortality) and conspecific neighborhood density metrics. In this section, however, we use Generalized Additive Models (GAMs), as they offer the flexibility to capture non-linear relationships between performance and predictors when empirically supported.
Note: The code is adapted from Hülsmann et al. (2024) and the latitudinalCNDD repository by Lisa Hülsmann.
Our demonstration here uses seedling data: a subset of the Barro Colorado Island (BCI) 50-ha forest dynamics plot seedling data, encompassing only 30 species. Seedlings are censused in 1x1 m plots, distributed at 5-m intervals throughout the 50-ha plot. In this example, neighbor densities are calculated as the number of conspecific and heterospecific seedling neighbors within the same 1x1 m plot as the focal individual. Please note that this analysis is solely for demonstration purposes; no biological conclusions should be drawn from the results due to the limited data subset used.
The code below assumes the data is in a format where each row is an observation for an individual from a census. For this particular data set, the column descriptions are as follows:
id: unique identifier for each seedling
plot: location
spp: species
date: date of observation
year: year of observation
census: census number
status: status at census, 0 = alive, 1 = dead
height.last.census: seedling height at the previous census
con_dens: number of conspecific neighbors within each seedling plot
total_dens: total number of neighbors
het_dens: number of heterospecific neighbors
time.since.last.census: time interval between censuses
data_BCI <- read_csv("./data/BCI_seedling_data_30_spp_2023.csv")
colnames(data_BCI)
 [1] "id"                            "q20"
 [3] "plot"                          "tag"
 [5] "spp"                           "date"
 [7] "year"                          "census"
 [9] "status"                        "height.last.census"
[11] "height.last.census.log.scaled" "con_dens"
[13] "total_dens"                    "het_dens"
[15] "time.since.last.census"
# make sure variables are taken as factor
data_BCI$census <- factor(data_BCI$census)
data_BCI$spp <- factor(data_BCI$spp)
data_BCI$plot <- factor(data_BCI$plot)
data_BCI$height <- data_BCI$height.last.census
data_BCI$interval <- as.numeric(data_BCI$time.since.last.census)

Let’s take a quick look at the data set we’ll be working with:
# Exploring data
# visualize
ggplot(data_BCI, aes(x = con_dens)) +
geom_histogram(binwidth = 1, color = "black", fill = "steelblue2") +
labs(x = "Conspecific density", y = "Count",
title = "Conspecific densities") +
theme_bw(12)

We categorize a species as data deficient if it has fewer than four unique conspecific density values. This threshold is important because estimating a reliable “slope” requires at least a few different values along the variable of interest. At the end of this section of the code, a data frame named ‘nsp’ will be generated, classifying each species as either “data deficient” or “not data deficient”.
From our data set, 9 of the 30 species were assigned as data deficient, using a minimum of 4 unique conspecific density values and a minimum conspecific density range of 1 as our thresholds for what constitutes a data deficient species.
nval = 4 ## this is the number of 'unique' conspecific values
minrange = 1 # minimum range for conspecific density
data_BCI %>%
group_by(spp) %>%
summarise(
range_con_dens = max(con_dens) - min(con_dens),
max_con_dens = max(con_dens),
unique_con_dens = length(unique(con_dens)),
unique_total_dens = length(unique(total_dens)),
unique_height = length(unique(height.last.census))
) %>%
# check if conspecific less than defined nval
mutate(issue_nval = unique_con_dens < nval,
# range should at least be equal to minrange
issue_range = range_con_dens < minrange,
trymodel = !(issue_nval|issue_range),
# Assignment of "data deficient" species
data_deficient = !trymodel
) -> nsp # Store the resulting dataframe in the object 'nsp'
# Visualize the top rows of the table 'nsp' in a formatted manner
head(nsp)
# A tibble: 6 × 10
spp range_con_dens max_con_dens unique_con_dens unique_total_dens
<fct> <dbl> <dbl> <int> <int>
1 ACALDI 7 7 8 35
2 AEGICE 4 4 5 28
3 BEILPE 98 98 69 72
4 CAPPFR 5 5 6 52
5 CECRIN 7 7 7 20
6 CORDLA 4 4 5 32
# ℹ 5 more variables: unique_height <int>, issue_nval <lgl>, issue_range <lgl>,
# trymodel <lgl>, data_deficient <lgl>
In the context of the Janzen-Connell Hypothesis, we focus on the differences between CDD and HDD (‘Stabilizing effect’), where conspecific neighbors’ negative effects surpass those from heterospecifics, leading to population stabilization (Broekman et al. 2019). This effect is vital for estimating a species’ self-limitation.
In this section we quantify the conspecific density impact, using a model with conspecific density and total density. By using total density in the model instead of heterospecific density, the estimated effect (slope) of conspecific density in our analysis corresponds to the result of subtracting HDD from CDD (Hülsmann et al. 2024).
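This reparameterization can be checked with a quick simulation (a sketch using a simple linear model rather than a GAM, on made-up data): because total = con + het, the coefficient of con in a model with con and total equals the difference between the con and het coefficients from a model with con and het.

```r
set.seed(1)
n   <- 200
con <- rpois(n, 3)   # conspecific neighbors
het <- rpois(n, 10)  # heterospecific neighbors
tot <- con + het     # total neighbors

# Simulated response with distinct conspecific and heterospecific effects
y <- 0.5 + 0.3 * con + 0.1 * het + rnorm(n, sd = 0.2)

m_con_het <- lm(y ~ con + het)  # CDD and HDD separately
m_con_tot <- lm(y ~ con + tot)  # conspecific + total density

# The con coefficient in the total-density model is CDD minus HDD
all.equal(unname(coef(m_con_tot)["con"]),
          unname(coef(m_con_het)["con"] - coef(m_con_het)["het"]))
```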
We use here a Generalized Additive Model (GAM) with a complementary log-log (cloglog) link function to model seedling status (‘alive’ or ‘dead’) as a function of conspecific density (con_dens), total density (total_dens), and the size of the focal individual (e.g., height or dbh), the latter serving as a covariate.
The cloglog link allows accounting for differences in observation time Δt through an offset term, applicable to datasets coded as 0 = alive and 1 = dead (Currie 2016). In other words, to use a cloglog link to account for differences in census interval length, you must be modeling mortality, not survival (more about the cloglog link here - section 3.2).
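The mechanics of the offset can be verified directly: on the cloglog scale, adding log(Δt) to the linear predictor turns an annual mortality probability into the mortality probability over an interval of Δt years (a minimal sketch; the linear-predictor value η is an arbitrary example):

```r
cloglog_inv <- function(eta) 1 - exp(-exp(eta))  # inverse cloglog link

eta <- -1.2  # linear predictor for annual mortality (arbitrary example)
dt  <- 2.5   # census interval in years

p_annual   <- cloglog_inv(eta)            # annual mortality probability
p_interval <- cloglog_inv(eta + log(dt))  # with offset log(dt)

# Survival compounds multiplicatively over the interval
all.equal(1 - p_interval, (1 - p_annual)^dt)
```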
Next, we specify the smooth terms in the model formula. The k value determines the complexity of each smooth. If k exceeds the number of unique values in a variable, it is adjusted to two less than that number. We also monitor model convergence and any warnings that may arise. With this setup, we establish k = 2 as the minimum value, since k = 1 would result in a linear relationship. This is why the previous section used a minimum of 4 unique conspecific density values as the threshold: a threshold of 3 would enforce linearity in the model.
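A minimal sketch of the adjustment rule on hypothetical data (mirroring the logic in the model_fit function below): with only 5 unique predictor values, the default k = 10 is cut back to 3 before fitting.

```r
library(mgcv)

set.seed(3)
d <- data.frame(x = rep(1:5, each = 20))  # only 5 unique x values
d$y <- rbinom(nrow(d), 1, plogis(-1 + 0.3 * d$x))

k_default <- 10
n_unique  <- length(unique(d$x))
k_use <- if (k_default > n_unique) n_unique - 2 else k_default  # 5 - 2 = 3

m <- gam(y ~ s(x, k = k_use), family = binomial, data = d, method = "REML")
```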
# Define a function to fit a model to the given data
model_fit = function(data, spinfo, reduced = F) {
# create new factor with correct factor levels per species
data$census = factor(data$census)
# Determine if there's more than one unique value in 'census'
# and construct the relevant term for the model formula
term_c = ifelse(length(unique(data$census)) > 1,
"+ s(census, bs = 're')", "")
#term_p = "+ s(plot, bs = 're')"
if (reduced) {
form = as.formula(paste0("status ~ s(height, k = k1) +
s(total_dens, k = k2)"
, term_c)) # reduced model #,term_p
} else {
form = as.formula(paste0("status ~ s(height, k = k1) +
s(total_dens, k = k2) +
s(con_dens, k = k3)"
, term_c)) # full model #, term_p
}
# Choose basis dimensions (default k = 10); these k1-k3 values are
# picked up when gam() evaluates the formula constructed above
k1 = k2 = k3 = 10
if (k1 > spinfo$unique_height) k1 = spinfo$unique_height - 2
if (k2 > spinfo$unique_total_dens) k2 = spinfo$unique_total_dens - 2
if (k3 > spinfo$unique_con_dens) k3 = spinfo$unique_con_dens - 2
# k = 1 would force linearity for that term, and we aim to consider
# also potential non-linear relationships, so conspecific k is k = 2
# minimum.
# Fit the Generalized Additive Model (GAM)
mod = try(gam(form
, family = binomial(link=cloglog)
, offset = log(interval)
, data = data
, method = "REML"
) , silent = T
)
# Return the fitted model
return(mod)
}
# check model run
# Define a function to check the convergence of the model
model_convergence = function(model) {
# note: 'spp' is taken from the calling environment for the messages below
# gam not available
if (!any(class(model)=="gam")) {
print(paste(spp, "gam failed"))
} else {
# gam not converged
if (!model$converged) {
print(paste(spp, "no convergence"))
} else {
# Explore warning "glm.fit: fitted probabilities numerically 0 or 1
# occurred (complete separation)"
eps <- 10 * .Machine$double.eps
glm0.resids <- augment(x = model) %>%
mutate(p = 1 / (1 + exp(-.fitted)),
warning = p > 1-eps,
influence = order(.hat, decreasing = T))
infl_limit = round(nrow(glm0.resids)/10, 0)
# Check if any of the warnings correspond to the 10% most
# influential observations. If not, then it is okay.
num = any(glm0.resids$warning & glm0.resids$influence <
infl_limit)
# If there's complete separation
if (num) {
print(paste(spp, "complete separation is likely"))
} else {
# Check if the Vc component of the model is missing
if (is.null(model$Vc)) {
print(paste(spp, "Vc not available"))
} else {
# If everything is fine, return the model
return(model)
}
}
}
}
}

Here we fit the models for all species. Data deficient species are treated as one group, since each has insufficient data to be modeled independently. In this dataset, we have 9 species that are flagged as ‘data deficient’.
# Check the distribution of species marked as 'data deficient'
# (T or F)
# and determine the number of species for which we will try the model
# We have 9 species that are flagged as 'data deficient'
table(data_deficient = nsp$data_deficient, trymodel = nsp$trymodel)
              trymodel
data_deficient FALSE TRUE
         FALSE     0   21
         TRUE      9    0
# Convert spp to character
data_BCI$spp <- as.character(data_BCI$spp)
# Extract species that are flagged as 'data deficient' from the nsp
# dataframe
data_deficient_species <- nsp$spp[nsp$data_deficient == TRUE]
# Replace species names with "data_deficient_seedling" for those
# flagged as 'data_deficient'
data_BCI2 <- data_BCI %>%
mutate(spp = ifelse(spp %in% data_deficient_species,
"data_deficient_seedling", spp))
# Summarize attributes for each species in the modified dataframe
# (including the new "data_deficient_seedling" group)
data_BCI2 %>%
group_by(spp) %>%
summarise(
range_con_dens = max(con_dens, na.rm = TRUE) - min(con_dens,
na.rm = TRUE),
max_con_dens = max(con_dens, na.rm = TRUE),
unique_con_dens = length(unique(con_dens)),
unique_total_dens = length(unique(total_dens)),
unique_height = length(unique(height.last.census))
) %>%
mutate(
# less than nval unique values in consp densities
issue_nval = unique_con_dens < nval,
# range should at least be equal to minrange
issue_range = range_con_dens < minrange,
trymodel = !(issue_nval | issue_range),
# preliminary assignment of data_deficient species
data_deficient = !trymodel
) -> nsp_data_deficient
####
## Fit model for each species
###
# Create lists to store results of the main and reduced model fits
# List for main model fits
res_mod = list()
# List for reduced model fits (for calculating Pseudo R2)
res_red_mod = list()
# Loop through species in the nsp_data_deficient dataframe for which
# we will try modeling
# (Here, the group of data_deficient species is treated as a single
# species)
for(spp in nsp_data_deficient$spp[nsp_data_deficient$trymodel]) {
# select data for individual species
dat_sp = data_BCI2[data_BCI2$spp == spp, ]
# Fit the main and reduced models for the current species
mod = model_fit(data = dat_sp,
spinfo = nsp_data_deficient[nsp_data_deficient$spp
== spp, ])
mod_red = model_fit(data = dat_sp,
spinfo = nsp_data_deficient[nsp_data_deficient$spp
== spp, ], reduced = T)
# Check the convergence of both models
res = model_convergence(model = mod)
res_red = model_convergence(model = mod_red)
# save result
if (is.character(res)) {
nsp$data_deficient[nsp$spp == spp] = T
} else {
res_mod[[spp]] = res
res_red_mod[[spp]] = res_red
}
}

Regression table via broom::tidy():
coefs = lapply(res_mod, broom::tidy)
coefs = Map(cbind, coefs, sp = names(coefs))
coefs = do.call(rbind, coefs)

Model summary via broom::glance():
# For each model result in 'res_mod', extract summary statistics
# (like log-likelihood, AIC, BIC, etc.) using the 'glance' function
# from the 'broom' package
# df logLik AIC BIC deviance df.residuals nobs
sums = lapply(res_mod, broom::glance)
sums = Map(cbind, sums, sp = names(sums))
sums = do.call(rbind, sums)
head(sums)
              df      logLik        AIC        BIC   deviance df.residual  nobs
ACALDI 7.738186 -428.3495 876.0896 924.9334 856.6990 1131.2618 1139
AEGICE 8.763874 -524.7678 1070.5298 1123.3674 1049.5355 1125.2361 1134
BEILPE 17.776666 -11142.3710 22323.8580 22478.1537 22284.7420 19697.2233 19715
CAPPFR 10.232458 -2024.4980 4075.8754 4174.3285 4048.9960 11210.7675 11221
CECRIN 6.221103 -143.4318 301.0402 326.2521 286.8635 252.7789 259
CORDLA 6.840230 -531.0591 1079.4468 1125.3893 1062.1182 1477.1598 1484
adj.r.squared npar sp
ACALDI 0.019973930 37 ACALDI
AEGICE 0.049305090 34 AEGICE
BEILPE 0.069072474 41 BEILPE
CAPPFR 0.008391332 35 CAPPFR
CECRIN 0.118798167 26 CECRIN
CORDLA 0.015528602 34 CORDLA
# AUC
aucs = lapply(res_mod, function(x) {
roc <- performance::performance_roc(x, new_data = x$model)
bayestestR::area_under_curve(roc$Spec, roc$Sens)
})
sums$AUC = unlist(aucs)
# Pseudo R2
sums$pseudoR2 = 1 - (unlist(lapply(res_mod, function(x) x$deviance)) /
unlist(lapply(res_red_mod, function(x)
x$deviance)))

# plot splines in pdf ----------------------------------------------
# Specify the name of the PDF file where the plots will be saved
pdf_file <- "mortality.pdf"
# Open the PDF file for writing
pdf(pdf_file)
# Loop through all the model results stored in 'res_mod'
for (i in 1:length(res_mod)) {
# Get the vizmod for the current species
vizmod <- getViz(res_mod[[i]], post = T, unconditional = T)
pl <- plot(vizmod, nsim = 20, allTerms = T) +
# Add confidence interval line/ fit line/ simulation line
l_ciLine() + l_fitLine() + l_simLine() +
#Add confidence interval bar/# Add fitted points
l_ciBar() + l_fitPoints(size = 1) +
l_rug() + # Add rug plot
# Add title with the name of the current species
labs(title = names(res_mod)[i])
# Print the plot to the R console only for the first 2 species
if (i <= 2) {
print(pl, pages = 1)
}
}
# Close the PDF file
dev.off()
quartz_off_screen
                2
In this section we illustrate the calculation of the average marginal effects, including both the absolute Average Marginal Effect (aAME) and the relative Average Marginal Effect (rAME).
Here, AMEs are computed by determining the marginal effect (essentially the partial derivative or slope) of a predictor for a unit change in conspecific density at each observed value, and then averaging these effects. This method yields a single, interpretable measure that offers an averaged estimate of the predictor’s impact. It’s worth noting that the AME, compared to traditional effect sizes, provides a clearer measure of a predictor’s influence by quantifying the average change in the outcome for each unit change in the predictor rather than the raw coefficient (effect size) that may be influenced by the scale of the variable or confounded by interactions with other predictors. In the analyses illustrated here, we are interested in calculating the average marginal effect of conspecific density on mortality.
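The “slope at each observation, then average” idea can be sketched with a plain logistic GLM on simulated data (a hypothetical example, not the BCI workflow): perturb the predictor by a small step on either side of each observed value, take the finite-difference slope on the probability scale, and average.

```r
set.seed(42)
n <- 500
x <- rpois(n, 4)                          # simulated neighbor density
y <- rbinom(n, 1, plogis(-2 + 0.25 * x))  # simulated mortality

m <- glm(y ~ x, family = binomial)

h  <- 1e-6                                # small step for the derivative
d0 <- data.frame(x = x - h)
d1 <- data.frame(x = x + h)

# Finite-difference slope of P(death) at each observation, then averaged
me  <- (predict(m, d1, type = "response") -
        predict(m, d0, type = "response")) / (2 * h)
AME <- mean(me)
```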
Furthermore, rAMEs enhance the level of interpretability by delivering a normalized measure of these averaged effects. They represent the percentage change in the response variable due to a unit change in the predictor relative to the base mortality, providing an intuitive, relative grasp of the predictor’s influence. This normalization process allows for the comparison of effects across different species, each with its own base level of mortality.
To calculate aAMEs and rAMEs we need the list ‘res_mod’ created above, which contains all the fitted model results.
There are also alternative approaches for calculating average marginal effects, e.g., via the ‘marginaleffects’ package, which we encourage the user to explore.
Here we provide three scenarios for calculating aAMEs or rAMEs through the change argument.
Equilibrium: This scenario computes aAMEs or rAMEs for a unit increase in conspecific density above observed values, providing insight into the marginal effect of a density increase in an existing ecosystem.
Invasion: This scenario models the effect of introducing a species into a community in which it did not previously occur, transitioning the conspecific density from zero to a specified unit. This helps understand the impact of sudden species introductions, linking to theoretical considerations from coexistence theory (Chesson 2000).
IQR: This scenario evaluates aAMEs or rAMEs across the middle 50% range of conspecific density, offering a perspective on the ecological relevance of CDD within typical density ranges, though without being comparable among species.
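The change argument below stores each scenario as a small R expression in text form, which get_AME later evaluates with eval(parse(...)). A quick check of what each scenario expands to (q1 and q3 here are stand-in quartile values for illustration):

```r
additive <- 1     # unit increase in conspecific density
q1 <- 2; q3 <- 7  # stand-in quartiles for illustration

change <- list(equilibrium = data.frame(con_dens = "paste('+', additive)"),
               invasion    = data.frame(con_dens = "c(0, additive)"),
               iqr         = data.frame(con_dens = "c(q1, q3)"))

eval(parse(text = change$equilibrium$con_dens))  # "+ 1" (additive shift)
eval(parse(text = change$invasion$con_dens))     # 0 1  (from zero to one)
eval(parse(text = change$iqr$con_dens))          # 2 7  (q1 to q3)
```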
#### choose predictors for local density - setting AMEs
# Define a vector of predictors with their corresponding names
predictors <- c(con_dens = "con_dens",
total_dens = "total_dens")
# change in conspecific density for AME calculations----
# One more neighbor seedling (not for adult trees)
additive=1
# set interval to 1 for annual mortality probabilities
interval = 1
# Specify how the conspecific density should be changed for
# different scenarios:
# 'equilibrium', 'invasion', and 'iqr' (interquartile range)
change = list(equilibrium = data.frame(con_dens =
"paste('+', additive)")
, invasion = data.frame(con_dens = "c(0, additive)")
, iqr = data.frame(con_dens = "c(q1, q3)")
)
## Set the number of iterations for any subsequent calculations or
# simulations
iter = 500

This section defines two functions: setstep and get_AME. get_AME calculates the Average Marginal Effect for a given term in a model. It first creates two copies of the data, d0 and d1, and adjusts the term of interest based on the change argument (one of the three scenarios described in the previous section). It then calculates predictions for the two data sets and computes the marginal effects.
# Define a function to set the step size for numerical derivatives
# This function determines an appropriate step size using machine's
# precision/machine epsilon to strike a balance between accuracy and
# rounding errors.
setstep = function(x) {
eps = .Machine$double.eps
return(x + (max(abs(x), 1, na.rm = TRUE) * sqrt(eps)) - x)
}
# Function to compute absolute and relative average marginal effects
# (AME and rAME) for a given model--------------------
get_AME = function(mod, data, term
, change = NULL
, at = NULL
, offset = 1
, relative = F
, iterations = 1000
, seed = 10
, samples = F) {
# Prepare two dataframes for different scenarios in marginal
# effect computation
d0 = d1 = data
# Adjust the 'term' in the data based on the 'change' parameter
if (is.null(change)) {
# If change is NULL, adjust the term for numerical derivative
# computation
d0[[term]] = d0[[term]] - setstep(d0[[term]])
d1[[term]] = d1[[term]] + setstep(d1[[term]])
}
# If change has an additive component, adjust the term accordingly
if (grepl("\\+", paste(change, collapse = "_"))) {
d1[[term]] = d1[[term]] + as.numeric(gsub("\\+", "", change))
}
# If change is explicit with two values, set the term values
# directly
if (length(change) == 2) {
d0[[term]] = as.numeric(change[1])
d1[[term]] = as.numeric(change[2])
}
# If 'at' is specified, set predictor values in the data to these
# fixed values
# (allows the function to calculate the marginal effects at the
# specified values)
if (!is.null(at)) {
for (i in names(at)) {
d0[[i]] = at[[i]]
d1[[i]] = at[[i]]
}
}
# Create matrices for prediction based on the model
Xp0 <- predict(mod, newdata = d0, type="lpmatrix")
Xp1 <- predict(mod, newdata = d1, type="lpmatrix")
# Extract model parameters
ilink <- family(mod)$linkinv
beta <- coef(mod)
vc <- mod$Vc # covariance matrix
# Compute marginal effects based on the adjusted data
pred0 <- 1 - (1-ilink(Xp0 %*% beta))^offset
pred1 <- 1 - (1-ilink(Xp1 %*% beta))^offset
ME <- (pred1-pred0)
# Adjust for numerical derivative if change is NULL
if (is.null(change)) {
ME <- ME/(d1[[term]] - d0[[term]])
}
# convert to relative if requested
if (relative == T) ME = ME/pred0
# average marginal effect
AME = mean(ME)
# Simulate AMEs to compute uncertainty in the estimates
# Compute the variance of the average marginal effect through a
# "posterior" simulation.
# This involves simulating from a multivariate normal distribution
# using the model's
#coefficient means and covariance matrix
if (!is.null(seed)) set.seed(seed)
coefmat = mvrnorm(n = iterations
, mu = beta
, Sigma = vc)
# For each simulated coefficient vector, estimate the Average
# Marginal Effect (AME).
AMEs = apply(coefmat, 1, function(coefrow) {
# Calculate marginal effects based on the simulated coefficient
pred0 <- 1 - (1-ilink(Xp0 %*% coefrow))^offset
pred1 <- 1 - (1-ilink(Xp1 %*% coefrow))^offset
ME <- (pred1-pred0)
# if change is NULL, use numerical derivative
if (is.null(change)) {
ME <- ME/(d1[[term]] - d0[[term]])
}
# convert to relative if requested
if (relative == T) ME = ME/pred0
# average marginal effect
AME = mean(ME)
return(AME)
})
# Combine results
# If the 'samples' flag is FALSE, return the summary results.
# Otherwise, return both the summary and the sample results.
if (!samples) {
res = data.frame(term
, estimate = AME
, std.error = sqrt(var(AMEs))
, estimate.sim = mean(AMEs)
, offset
, change.value = paste(change, collapse = "_"))
return(res)
} else {
res_sums = data.frame(term
, estimate = AME
, std.error = sqrt(var(AMEs))
, offset
, change.value = paste(change, collapse = "_"))
res_samples = data.frame(term
, estimate = AMEs
, MLE = AME
, offset
, change.value = paste(change,
collapse = "_"))
res = list(res_sums, res_samples)
return(res)
}
}

At the end of this code segment, the aAME data frame contains the average marginal effect estimates for each predictor under each change scenario; in essence, aAME provides the average effect that a predictor has on the outcome in each scenario. The aAMEsamples data frame contains multiple simulated samples of the aAME for each predictor and change scenario, which allows for an evaluation of the uncertainty inherent in the aAME estimates.
# Absolute AMEs ---------------------------------------------------
# Initialize empty data frames to store the results
aAME = data.frame()
aAMEsamples = data.frame()
# Loop through predictor names that match "con_"
for (i in names(predictors)[grepl("con_", names(predictors))]) {
# Loop through different change settings (e.g., equilibrium, invasion,
# iqr)
for (j in names(change)) {
# Calculate the AME for each model in res_mod
temp = lapply(res_mod, function(x){
# If the change is based on IQR (interquartile range), calculate the
# 1st and 3rd quartiles
if (j == "iqr") {
q1 = quantile(x$model$con_dens, probs = 0.25)
q3 = quantile(x$model$con_dens, probs = 0.75)
}
# Use the get_AME function to calculate the AME for the current model
get_AME(x
, data = x$model
, offset = interval
, term = i
, change = eval(parse(text = change[[j]][,i]))
, iterations = iter
, samples = T
)
}
)
# AME
tempAME = lapply(temp, function(x) x[[1]])
tempAME = Map(cbind, tempAME, change = j, sp = names(tempAME))
tempAME = do.call(rbind, tempAME)
aAME = rbind(aAME, tempAME)
# AME samples
tempSamples = lapply(temp, function(x) x[[2]])
tempSamples = Map(cbind, tempSamples, change = j,
sp = names(tempSamples), iter = iter)
tempSamples = do.call(rbind, tempSamples)
aAMEsamples = rbind(aAMEsamples, tempSamples)
}
}
head(aAME)
          term    estimate   std.error offset change.value      change     sp
ACALDI con_dens 0.017532523 0.016062221 1 + 1 equilibrium ACALDI
AEGICE con_dens 0.089367793 0.025499132 1 + 1 equilibrium AEGICE
BEILPE con_dens 0.006043419 0.001063042 1 + 1 equilibrium BEILPE
CAPPFR con_dens 0.020944544 0.004140303 1 + 1 equilibrium CAPPFR
CECRIN con_dens 0.028621858 0.027033954 1 + 1 equilibrium CECRIN
CORDLA con_dens 0.013464775 0.030698485 1 + 1 equilibrium CORDLA
At the end of this code segment, the rAME data frame contains the rAME estimates for the predictor and each type of change, and the rAMEsamples data frame contains the rAME samples for each predictor and type of change.
# Relative rAMEs -----------------------------------------------------------
# Initialize empty data frames to store the results
rAME = data.frame()
rAMEsamples = data.frame()
# Loop through predictor names that match "con_"
for (i in names(predictors)[grepl("con_", names(predictors))]) {
# Loop through different change settings
# (e.g., equilibrium, invasion, iqr)
for (j in names(change)) {
# Calculate the relative AME (rAME) for each model in res_mod
temp = lapply(res_mod, function(x){
# If the change is based on IQR (interquartile range),
# calculate the 1st and 3rd quartiles
if (j == "iqr") {
q1 = quantile(x$model$con_dens, probs = 0.25)
q3 = quantile(x$model$con_dens, probs = 0.75)
}
# Use the get_AME function to calculate the rAME for the
# current model, setting the 'relative' argument to TRUE
get_AME(x
, data = x$model
, offset = interval
, term = i
, change = eval(parse(text = change[[j]][, i]))
, iterations = iter
, relative = T
, samples = T
)
}
)
# rAME
tempAME = lapply(temp, function(x) x[[1]])
tempAME = Map(cbind, tempAME, change = j, sp = names(tempAME))
tempAME = do.call(rbind, tempAME)
rAME = rbind(rAME, tempAME)
# rAME samples
tempSamples = lapply(temp, function(x) x[[2]])
tempSamples = Map(cbind, tempSamples, change = j,
sp = names(tempSamples), iter = iter)
tempSamples = do.call(rbind, tempSamples)
rAMEsamples = rbind(rAMEsamples, tempSamples)
}
}
head(rAME)
          term   estimate  std.error offset change.value      change     sp
ACALDI con_dens 0.12190548 0.11757479 1 + 1 equilibrium ACALDI
AEGICE con_dens 0.46332386 0.13830487 1 + 1 equilibrium AEGICE
BEILPE con_dens 0.02026319 0.00362583 1 + 1 equilibrium BEILPE
CAPPFR con_dens 0.42931069 0.08161156 1 + 1 equilibrium CAPPFR
CECRIN con_dens 0.04121409 0.04242789 1 + 1 equilibrium CECRIN
CORDLA con_dens 0.10440629 0.24200786 1 + 1 equilibrium CORDLA
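The sample data frames can then be summarized into uncertainty intervals, e.g., percentile-based 95% intervals per species and scenario. A sketch, using a mock data frame in place of the aAMEsamples (or rAMEsamples) object produced above:

```r
library(dplyr)

# Mock samples standing in for 'aAMEsamples' produced above
set.seed(7)
samples <- data.frame(sp       = rep(c("SP1", "SP2"), each = 500),
                      change   = "equilibrium",
                      estimate = c(rnorm(500, 0.02, 0.01),
                                   rnorm(500, 0.05, 0.02)))

# Percentile-based 95% interval per species and change scenario
ci <- samples %>%
  group_by(sp, change) %>%
  summarise(lower  = quantile(estimate, 0.025),
            median = median(estimate),
            upper  = quantile(estimate, 0.975),
            .groups = "drop")
```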
# Save results ---------------------------------------------------
save(list = c("aAME", "aAMEsamples", "rAME", "rAMEsamples", "nsp",
"coefs", "sums") # "nsp_data_deficient"
, file = paste0( "./data/mortality.Rdata"))
write.csv(aAME, "./data/aAME.csv")
write.csv(rAME, "./data/rAME.csv")